Current Issue: October–December | Volume: 2022 | Issue Number: 4 | Articles: 5
The incidence of liver cancer (hepatocellular carcinoma, HCC) is rising, and clinical outcomes remain poor, so more accurate discrimination between tumor tissue and adjacent nontumor tissue is needed. The aim of this study was to construct a diagnostic model based on a random forest (RF) and an artificial neural network (ANN) that can aid in identifying diseased tissue, such as cancerous tissue, for HCC clinical diagnosis and surgical guidance. GSE36376 and GSE121248 from the Gene Expression Omnibus (GEO) were used as training sets in this investigation. The R package "limma" and WGCNA were used to filter the training sets for statistically significant (p < 0.05) differentially expressed genes. To better understand their biological functions and characteristics, GO and KEGG enrichment analyses were performed in R. To screen and further characterize the key genes, we performed protein-protein interaction (PPI) analysis and random forest analysis. Next, we built an ANN to predict the training sets and a validation set (GSE84402), and ROC curves were plotted to calculate the area under the curve (AUC). Immune cell infiltration analysis then indicated differences in immune cell subsets between the control and case groups. Finally, survival analysis of the key genes was carried out using data from the TCGA database. Based on the expression of the nine key genes identified above, we built the artificial neural network (ANN), and the accuracy of the final models was assessed with ROC curves. The AUC was 0.984 (95% CI 0.972–0.993) on the training sets. Its predictive capability was further assessed on the validation set, where the AUC was 0.929 (95% CI 0.786–1.000). In summary, this method effectively classifies hepatocellular carcinoma tissues and the corresponding noncancerous tissues and provides a reasonable new approach for the early diagnosis of liver cancer.
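The two-stage pipeline sketched in this abstract (random forest gene screening followed by an ANN scored with ROC AUC) can be illustrated with a minimal, hedged example. The data below are synthetic stand-ins, not the GEO expression matrices, and selecting exactly nine features simply mirrors the nine key genes mentioned in the abstract; this is not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))                 # fake expression matrix
# plant signal in the first three "genes" so the pipeline has something to find
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

# Step 1: random forest ranks genes by importance; keep the top nine
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top9 = np.argsort(rf.feature_importances_)[::-1][:9]

# Step 2: train an ANN on the selected genes and score it with ROC AUC
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top9], y, test_size=0.3, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1])
```

On real data the held-out set would be an independent GEO series (here, GSE84402) rather than a random split.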
Electroencephalography (EEG)-based measurement of fine tactile sensation produces large amounts of data, making manual evaluation costly. In this study, an EEG-based machine-learning (ML) model using a support vector machine (SVM) was established to automatically evaluate poststroke impairments in fine tactile sensation. Stroke survivors (n = 12, stroke group) and unimpaired participants (n = 15, control group) received stimulation with cotton, nylon, and wool fabrics, applied to both upper limbs of the stroke participants and to the dominant side of the controls. The average and maximal values of the relative spectral power (RSP) of the EEG during stimulation were used as inputs to the SVM-ML model, which was first optimized for classification accuracy on the different limb sides through hyperparameter selection (γ, C) for the radial basis function (RBF) kernel and cross-validation during cotton stimulation. Model generalization was investigated by comparing accuracies during stimulation with different fabrics on different limbs. The highest accuracies were achieved with (γ = 2^1, C = 2^3) for the RBF kernel (76.8%) and with six-fold cross-validation (75.4%), respectively, in the gamma band for cotton stimulation; these were selected as the optimal parameters for the SVM-ML model. In model generalization, significant differences in poststroke fabric-stimulation accuracies shifted to the higher (beta/gamma) bands. The EEG-based SVM-ML model generated results similar to manual evaluation of cortical responses to fabric stimulation; this may aid automatic assessment of poststroke fine tactile sensation.
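The (γ, C) grid search with cross-validation described above is a standard RBF-SVM tuning procedure and can be sketched as follows. The "RSP" features here are synthetic placeholders rather than real EEG spectra, and the power-of-two grid is an assumption modeled on the γ = 2^1, C = 2^3 optimum quoted in the abstract.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_features = 120, 10                    # 10 stand-in RSP features per trial
X = rng.normal(size=(n_trials, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # separable toy labels

# Powers-of-two grid for gamma and C, searched with six-fold cross-validation
grid = {"gamma": [2.0**k for k in range(-3, 4)],
        "C":     [2.0**k for k in range(-1, 6)]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=6).fit(X, y)

best_gamma = search.best_params_["gamma"]
best_C = search.best_params_["C"]
cv_accuracy = search.best_score_                  # mean six-fold accuracy at the optimum
```

In the study, this selection was done on the cotton-stimulation data and the chosen parameters were then held fixed when testing generalization to the other fabrics.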
Background: TMPRSS2-ERG gene rearrangement, the most common E26 transformation-specific (ETS) gene fusion in prostate cancer, is known to contribute to the pathogenesis of this disease and carries diagnostic annotations for prostate cancer patients clinically. The ERG rearrangement status in prostatic adenocarcinoma currently cannot be reliably identified from histologic features on H&E-stained slides alone and hence requires ancillary studies such as immunohistochemistry (IHC), fluorescence in situ hybridization (FISH), or next-generation sequencing (NGS) for identification. Objective: We accordingly sought to develop a deep learning-based algorithm to identify ERG rearrangement status in prostatic adenocarcinoma based on digitized slides of H&E morphology alone. Design, Setting, and Participants: Whole slide images from 392 in-house and TCGA cases were employed and annotated using QuPath. Image patches of 224 × 224 pixels were exported at 10×, 20×, and 40× for input into a deep learning model based on the MobileNetV2 convolutional neural network architecture pre-trained on ImageNet. A separate model was trained for each magnification. The training and test datasets consisted of 261 and 131 cases, respectively. The output of the model was a prediction of ERG-positive (ERG rearranged) or ERG-negative (ERG not rearranged) status for each input patch. Outcome Measurements and Statistical Analysis: Various accuracy measurements, including the area under the curve (AUC) of the receiver operating characteristic (ROC) curves, were used to evaluate the deep learning models. Results and Limitations: All models showed similar ROC curves, with AUC results ranging between 0.82 and 0.85. The sensitivity and specificity of the 20× model were 75.0% and 83.1%, respectively. Conclusions: A deep learning-based model can successfully predict ERG rearrangement status in the majority of prostatic adenocarcinomas utilizing only H&E-stained digital slides.
Such an artificial intelligence-based model can eliminate the need to use extra tumor tissue for ancillary studies when assessing ERG gene rearrangement in prostatic adenocarcinoma.
Accurate preoperative glioma grading is essential for clinical decision-making and prognostic evaluation. Multiparametric magnetic resonance imaging (mpMRI) serves as an important diagnostic tool for glioma patients due to its superior performance in noninvasively describing the contextual information in tumor tissues. Previous studies achieved promising glioma grading results with mpMRI data using convolutional neural network (CNN)-based methods. However, these studies have not fully exploited or effectively fused the rich tumor contextual information provided in the magnetic resonance (MR) images acquired with different imaging parameters. In this paper, a novel graph convolutional network (GCN)-based mpMRI information fusion module (named MMIF-GCN) is proposed to comprehensively fuse the tumor grading-relevant information in mpMRI. Specifically, a graph is constructed according to the characteristics of mpMRI data. The vertices are defined as the glioma grading features of different slices extracted by the CNN, and the edges reflect the distances between the slices in a 3D volume. The proposed method updates the information in each vertex considering the interaction between adjacent vertices. The final glioma grading is conducted by combining the fused information in all vertices. The proposed MMIF-GCN module introduces an additional nonlinear representation learning step into the mpMRI information fusion process while maintaining the positional relationship between adjacent slices. Experiments were conducted on two datasets: a public dataset (BraTS2020) and a private one (GliomaHPPH2018). The results indicate that the proposed method can effectively fuse the grading information provided in mpMRI data for better glioma grading performance.
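The slice-graph idea above (vertices are per-slice CNN features, edges connect adjacent slices, and one graph-convolution step mixes each vertex with its neighbors) can be rendered as a toy NumPy sketch. The chain adjacency, feature dimensions, and random weight matrix are illustrative assumptions; this is not the paper's MMIF-GCN.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slices, feat_dim = 6, 8
H = rng.normal(size=(n_slices, feat_dim))     # per-slice CNN features (graph vertices)

# Chain adjacency with self-loops: slice i is linked to slices i-1 and i+1,
# preserving the positional relationship between adjacent slices
A = np.eye(n_slices)
for i in range(n_slices - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Symmetric normalization D^(-1/2) A D^(-1/2), the standard GCN propagation rule
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

W = rng.normal(size=(feat_dim, feat_dim))     # learnable weight (random stand-in here)
H_next = np.maximum(A_hat @ H @ W, 0.0)       # ReLU(A_hat H W): one nonlinear fusion step

fused = H_next.mean(axis=0)                   # pool all vertices for the final grading
```

In practice the edge weights would reflect actual inter-slice distances in the 3D volume rather than a uniform chain, and W would be learned end to end with the CNN.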
Background: For brain tumors, identifying the molecular subtypes from magnetic resonance imaging (MRI) is desirable but remains a challenging task. Recent machine learning and deep learning (DL) approaches may help classify or predict tumor subtypes from MRIs. However, most of these methods require annotated data with ground truth (GT) tumor areas manually drawn by medical experts. Manual annotation is a time-consuming process with high demands on medical personnel. As an alternative, automatic segmentation is often used. However, it does not guarantee quality and can lead to improper or failed segmented boundaries due to differences in MRI acquisition parameters across imaging centers, as segmentation is an ill-defined problem. Analogous to visual object tracking and classification, this paper shifts the paradigm by training a classifier using tumor bounding box areas in MR images. The aim of our study is to determine whether GT tumor areas can be replaced by tumor bounding box areas (e.g., ellipse-shaped boxes) for classification without a significant drop in performance. Method: In patients with diffuse gliomas, we trained a deep learning classifier for subtype prediction using tumor regions of interest (ROIs) defined by ellipse bounding boxes versus manually annotated data. Experiments were conducted on two datasets (US and TCGA) consisting of multi-modality MRI scans, where the US dataset contained patients with diffuse low-grade gliomas (dLGG) exclusively. Results: Prediction rates were obtained on the two test datasets: 69.86% for 1p/19q codeletion status on the US dataset and 79.50% for IDH mutation/wild-type status on the TCGA dataset. Comparison with training on annotated GT tumor data showed an average degradation of 3.0% (2.92% for 1p/19q codeletion status and 3.23% for IDH genotype).
Conclusion: Using tumor ROIs, i.e., ellipse bounding box tumor areas, to replace annotated GT tumor areas for training a deep learning scheme causes only a modest decline in subtype prediction performance. With more data made available, this may be a reasonable trade-off, where the decline in performance is counteracted by the additional data.
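The core substitution discussed above (an ellipse inscribed in a bounding box standing in for a hand-drawn tumor mask) is straightforward to express. The following NumPy sketch, with a synthetic image and hypothetical box coordinates, shows how such an ROI mask could be built; the paper's actual preprocessing may differ.

```python
import numpy as np

def ellipse_roi_mask(shape, cy, cx, ry, rx):
    """Boolean mask of an axis-aligned ellipse inscribed in a bounding box.

    (cy, cx) is the box center; (ry, rx) are the vertical/horizontal semi-axes.
    """
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

# Stand-in MR slice and a hypothetical tumor bounding box
mri_slice = np.random.rand(128, 128)
mask = ellipse_roi_mask(mri_slice.shape, cy=64, cx=64, ry=20, rx=30)

# The classifier then sees only the ellipse ROI instead of a precise GT contour
roi = mri_slice * mask
```

Because drawing such a box takes seconds versus minutes for a precise contour, the roughly 3% accuracy cost reported above is what makes this trade-off attractive at scale.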